A piecewise conservative method for unconstrained convex optimization

Authors

Abstract

We consider a continuous-time optimization method based on a dynamical system, where a massive particle starting at rest moves in the conservative force field generated by the objective function, without any kind of friction. We formulate a restart criterion based on the mean dissipation of the kinetic energy, and we prove a global convergence result for strongly convex functions. Using the Symplectic Euler discretization scheme, we obtain an iterative algorithm. We have considered a discrete mean-dissipation restart scheme, but we have also introduced a new procedure ensuring that at each iteration the decrease of the function is greater than the one achieved by a step of the classical gradient method. For the discrete-time algorithm, this last scheme is capable of guaranteeing a convergence result. We apply the same restart scheme to the Nesterov Accelerated Gradient method (NAG-C), and we use the restarted NAG-C as a benchmark in the numerical experiments. In the smooth convex problems considered, our method shows a faster convergence rate than the restarted NAG-C. Finally, we propose an extension of the algorithm to composite optimization: in numerical tests involving non-strongly convex functions with $\ell^1$-regularization, it has better performances than the well-known and efficient Fast Iterative Shrinkage-Thresholding Algorithm, accelerated with an adaptive restart scheme.
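The mechanism described above can be sketched numerically. The following is a minimal illustration, not the paper's exact method: it applies the symplectic Euler scheme to the frictionless dynamics $\ddot{x} = -\nabla f(x)$, with a simplified restart test that resets the velocity whenever a step decreases $f$ less than an explicit gradient step of the same length would. The quadratic objective, the step size, and this particular restart test are all assumptions made for illustration.

```python
import numpy as np

A = np.diag([1.0, 10.0])   # illustrative strongly convex quadratic (not from the paper)

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def conservative_restart(x0, h=0.05, iters=500, tol=1e-10):
    """Symplectic Euler on x'' = -grad f(x), with a restart that falls back to
    a plain gradient step (and kills the momentum) whenever the conservative
    step decreases f less than that gradient step would."""
    x = np.asarray(x0, float).copy()
    v = np.zeros_like(x)                   # particle starts at rest
    for _ in range(iters):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        v_new = v - h * g                  # symplectic Euler: update velocity first...
        x_new = x + h * v_new              # ...then position with the *new* velocity
        x_gd = x - h * g                   # reference classical gradient step
        if f(x_new) > f(x_gd):             # worse decrease than the gradient step?
            x_new, v_new = x_gd, np.zeros_like(v)   # restart: take gradient step, reset momentum
        x, v = x_new, v_new
    return x

x_star = conservative_restart([3.0, -2.0])
```

Because every iteration is at least as good as a gradient step, the sketch inherits gradient descent's convergence on this quadratic while the momentum phase can move much faster between restarts.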


Similar resources

Regularized Newton method for unconstrained convex optimization

We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover, the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...


Accelerated Regularized Newton Method for Unconstrained Convex Optimization

We consider a global complexity bound of regularized Newton methods for unconstrained convex optimization. The global complexity bound is an upper bound on the number of iterations required to get an approximate solution x such that f(x) − inf_y f(y) ≤ ε, where ε is a given positive constant. Recently, Ueda and Yamashita proposed the regularized Newton method whose global complexity bound is O...


Network optimization with piecewise linear convex costs

The problem of finding the minimum cost multi-commodity flow in an undirected and complete network is studied when the link costs are piecewise linear and convex. The arc-path model and overflow model are presented to formulate the problem. The results suggest that the new overflow model outperforms the classical arc-path model for this problem. The classical revised simplex, Frank and Wolfe and a ...


A dimension-reducing method for unconstrained optimization

A new method for unconstrained optimization in ℝⁿ is presented. This method reduces the dimension of the problem in such a way that it can lead to an iterative approximate formula for the computation of (n − 1) components of the optimum, while its remaining component is computed separately using the final approximations of the other components. It converges quadratically to a local optimum and it ...


A Penalized Quadratic Convex Reformulation Method for Random Quadratic Unconstrained Binary Optimization

The Quadratic Convex Reformulation (QCR) method is used to solve quadratic unconstrained binary optimization problems. In this method, a semidefinite relaxation is used to reformulate the problem as a convex binary quadratic program, which is solved using mixed-integer quadratic programming solvers. We extend this method to random quadratic unconstrained binary optimization problems. We develop a Penal...



Journal

Journal title: Computational Optimization and Applications

Year: 2021

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-021-00332-0